Shadow banning, also known as stealth banning, hell banning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.
The phrase "shadow banning" has a colloquial history and has undergone some evolution of usage. It originally applied to a deceptive sort of account suspension on web forums, where a person would appear to be able to post while actually having all of their content hidden from other users. In 2022, the term has come to apply to alternative measures, particularly visibility measures like delisting and downranking.
By concealing a user's contributions, or making them invisible or less prominent to other members of the service, the hope is that, in the absence of reactions to their comments, the problematic or otherwise out-of-favour user will become bored or frustrated and leave the site, and that spammers and internet trolls will be discouraged from continuing their unwanted behavior or creating new accounts.
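The underlying mechanism can be illustrated with a short sketch: content is stored normally, but a read-time filter hides a shadow-banned author's posts from everyone except that author. This is a minimal illustration assuming a simple in-memory model; the names (Comment, shadow_banned, visible_comments) are hypothetical and do not correspond to any real platform's code.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

# Accounts whose posts are hidden from everyone but themselves.
shadow_banned = {"troll42"}

def visible_comments(comments, viewer):
    """Return the comments a given viewer should see.

    A shadow-banned author still sees their own comments,
    so the ban is not readily apparent to them.
    """
    return [
        c for c in comments
        if c.author not in shadow_banned or c.author == viewer
    ]

thread = [Comment("alice", "Nice article!"), Comment("troll42", "Buy my stuff")]
print([c.text for c in visible_comments(thread, "troll42")])  # sees both comments
print([c.text for c in visible_comments(thread, "alice")])    # the spam is hidden
```

Because the filter is applied only when other users read the thread, the banned user's own view of the site remains unchanged, which is what distinguishes a shadow ban from an ordinary deletion or suspension.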
The term "shadow ban" is believed to have originated with moderators on the website Something Awful in 2001, although the feature was only used briefly and sparsely.
Michael Pryor of Fog Creek Software described stealth banning for online forums in 2006, explaining how such a system was in place in the project management system FogBugz "to solve the problem of how do you get the person to go away and leave you alone". As well as preventing problem users from engaging in flame wars, the system also discouraged spammers, who if they returned to the site would be under the false impression that their spam was still in place. The Verge describes it as "one of the oldest moderation tricks in the book", noting that early versions of vBulletin had a global ignore list known as "Tachy goes to Coventry", as in the British expression "to send someone to Coventry", meaning to ignore them and pretend they do not exist.
A 2012 update to Hacker News introduced a system of "hellbanning" for spamming and abusive behavior.
Early on, Reddit implemented (and continues to practice) shadow banning, purportedly to address spam accounts. In 2015, Reddit added an account suspension feature that was said to have replaced its sitewide shadow bans, though moderators can still shadow ban users from their individual subreddits via their AutoModerator configuration as well as manually. In 2019, a Reddit user who had been accidentally shadow banned for a year contacted support, after which their comments were restored.
A study of tweets written over a one-year period spanning 2014 and 2015 found that more than a quarter of a million tweets had been censored in Turkey via shadow banning.
Craigslist has also been known to "ghost" a user's individual ads, whereby the poster receives a confirmation email and can view the ad in their account, but the ad does not appear on the appropriate category page.
WeChat was found in 2016 to have banned, without any notification to the user, posts and messages that contain various combinations of at least 174 keywords, including "习包子" (Xi Jinping Baozi), "六四天安门" (June 4 Tiananmen), "藏青会" (Tibetan Youth Congress), and "ئاللاھ يولىدا" (in the way of Allah). Messages containing these keywords would appear to have been sent successfully but would not be visible on the receiving end.
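The behaviour the study describes can be sketched as a delivery filter that acknowledges every message to the sender but silently drops those matching a blacklist. This is a hypothetical illustration; WeChat's actual implementation is not public, and the function and variable names below are invented for the example.

```python
# Two of the keywords reported by the study; the real filter reportedly
# matched combinations of at least 174 such keywords.
BLACKLIST = {"六四天安门", "藏青会"}

inboxes = {"bob": []}  # recipient -> messages actually delivered

def send_message(recipient, text):
    """Acknowledge the message to the sender either way, but only
    deliver it if it contains no blacklisted keyword."""
    if not any(keyword in text for keyword in BLACKLIST):
        inboxes[recipient].append(text)
    return "sent"  # the sender sees success regardless

print(send_message("bob", "六四天安门"))  # "sent": looks successful to the sender
print(inboxes["bob"])                     # []: nothing was actually delivered
```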
In 2017, the phenomenon was noticed on Instagram, with posts which included specific hashtags not showing up when those hashtags were used in searches.
In December 2023, Human Rights Watch echoed the complaints of many Instagram and Facebook users who alleged a drastic reduction in visits to their posts and profiles when the content they posted was about Palestine or the Gaza genocide, without prior notification from Meta Platforms. The Markup's investigation confirmed that posts with war-related imagery or pro-Palestine hashtags were demoted, and that hashtags like "#Palestine" or "#AlAqsa" were suppressed from the "Top Posts" section. Meta responded by claiming that this was due to a bug on the platform, which led to criticism about possible bias in the algorithm.
Because a shadow ban happens without the user being informed, an incorrectly banned user has no opportunity to contact the platform and have the ban reversed unless they discover it by their own means.
Shadow bans are also problematic when a user has actually broken a rule, but unintentionally, or in a way that implied no bad intent and caused no damage to the online community. For example, suppose a user writes a comment on an online platform and the comment contains a URL pointing to a legitimate, non-spam source. An algorithm patrolling comments, instead of informing the user that URLs are not allowed and preventing the comment from being posted, might let the user post the comment with the URL while hiding it from everyone except the original poster.
If the user had been informed beforehand about the rule being broken, they would have been able to write a compliant comment and avoid the shadow ban.
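The difference between the two behaviours can be sketched as follows, assuming a hypothetical rule that forbids URLs in comments; all names are illustrative.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+")

def post_with_notice(text):
    """Transparent moderation: reject the comment and say why."""
    if URL_PATTERN.search(text):
        return None, "rejected: links are not allowed, please remove the URL"
    return {"text": text, "visible_to_all": True}, "published"

def post_with_shadow_ban(text):
    """Opaque moderation: always report success, but store a visibility
    flag so a rule-breaking comment is shown only to its author."""
    comment = {"text": text, "visible_to_all": URL_PATTERN.search(text) is None}
    return comment, "published"  # the author sees "published" either way

text = "See https://example.org for a legitimate source."
print(post_with_notice(text)[1])          # the user can fix the comment and repost
comment, status = post_with_shadow_ban(text)
print(status, comment["visible_to_all"])  # "published False": looks public, is not
```

In the first variant the user learns the rule immediately and can comply; in the second the platform records the comment as hidden while telling the author nothing, which is exactly the situation described above.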
A wrongful ban is undesirable regardless of the reason, and it erodes trust in a way that disincentivises the user from further engaging with the platform. This is a net loss when the shadow-banned user was a good contributor to the platform.
In the European Union, the Digital Services Act (DSA) contains Article 17, which directly addresses moderation practices and service restrictions by requiring platforms to disclose the reasons for such restrictions:
Providers of hosting services shall provide a clear and specific statement of reasons to any affected recipients of the service for any of the following restrictions imposed on the ground that the information provided by the recipient of the service is illegal content or incompatible with their terms and conditions.
In 2024, a Dutch user of Twitter sued the platform under the DSA, using the European Small Claims Procedure in the Amsterdam District Court, for breach of contract, and won the case. The plaintiff claimed that, under Article 17 DSA, Twitter had failed to proactively notify him of the demotion of his account and to provide the "clear and specific statement of reasons" that the article requires. In its defence, Twitter claimed that its terms and conditions contained clauses allowing it to modify access to functionalities and other obligations at any time. But the court deemed these clauses non-binding under the Unfair Terms in Consumer Contracts Directive and accordingly dismissed the defence.
Another legal implication is a perceived violation of freedom of speech, depending on how this principle is codified in regulations around the world. In the European Union, the DSA effectively bans shadow banning, because Article 17 requires platforms to always disclose the reasons for a ban or restriction; in practice, however, this requirement is often not enforced. Conversely, the First Amendment to the United States Constitution does not protect users' freedom of speech from shadow banning, because it applies only to interference by the American government, not by third-party private entities such as social networks.
During the 2020 Twitter account hijackings, hackers managed to obtain access to Twitter's internal moderation tools through both social engineering and bribing a Twitter employee. Through this, images were leaked of an internal account summary page, which revealed user "flags" set by the system and confirmed the existence of shadow bans on Twitter. Accounts were flagged with terms such as "Trends Blacklisted" and "Search Blacklisted", indicating that the user could not appear in public trends or public search results. After the situation was dealt with, Twitter faced accusations of censorship, with claims that it was trying to hide the existence of shadow bans by removing tweets that contained images of the internal staff tools. Twitter, however, claimed the tweets were removed because they revealed sensitive user information.
On December 8, 2022, the second thread of the Twitter Files, a series of Twitter threads based on internal Twitter, Inc. documents shared by owner Elon Musk with independent journalists Matt Taibbi and Bari Weiss, addressed a practice referred to as "visibility filtering" by previous Twitter management. The functionality included tools for tagging accounts as "Do not amplify" and for placing them on "blacklists" that reduced their prominence in search results and trending topics. It was also revealed that certain conservative accounts, such as the far-right Libs of TikTok, had been given a warning stating that decisions regarding them should only be made by Twitter's Site Integrity Policy, Policy Escalation Support (SIP–PES) team, which consists primarily of high-ranking officials. Musk and other critics cited these functions as examples of "shadow banning".
Elaine Moore of the Financial Times has offered an explanation of why users may come to believe they are subject to "shadow bans" even when they are not.